
POS efficient l1 polling #2506

Draft · wants to merge 19 commits into main

Conversation

@tbro (Contributor) commented Jan 29, 2025

Closes #2491

This PR:

Asynchronously updates the stake table cache by listening for `L1Event::NewHead`. This happens in a second update loop thread (added to `L1Client.update_tasks`). On each such event, stake table events are fetched starting at `L1State.snapshot.head` and ending at the block received in the `NewHead` event. This should be safe since `L1State.snapshot.head` is initialized to 0, so the first such event will trigger a large update from block 0 to the current head; subsequent updates will be small.
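The catch-up behavior described above can be sketched as a small pure function. The name `range_to_fetch` and the tuple return are illustrative assumptions, not the PR's actual code, which operates on `L1State.snapshot.head` inside the update loop:

```rust
/// Hypothetical helper: given the cached snapshot head and the head from a
/// `NewHead` event, return the inclusive block range of stake table events
/// that still need to be fetched, or `None` if the cache is already current.
fn range_to_fetch(snapshot_head: u64, new_head: u64) -> Option<(u64, u64)> {
    if new_head <= snapshot_head {
        // Cache already covers this head; nothing to do.
        return None;
    }
    // `snapshot_head` starts at 0, so the first update spans 0..=new_head
    // (one large catch-up fetch); later updates cover only the new blocks.
    Some((snapshot_head, new_head))
}
```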

```rust
    state.snapshot.head
};
while let Some(event) = events.next().await {
    let L1Event::NewHead { head } = event else {
```
Collaborator commented:

This is the latest L1 block (which may not be finalized yet). We should only consider finalized blocks for the stake table, because otherwise the stake table may change on L1 after we fetch it.
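One way to act on this review comment is to ignore `NewHead` entirely and only update on finalized heads. The `NewFinalized` variant below is a hypothetical mirror of the snippet's event enum, assumed for illustration; the real enum lives in the client crate:

```rust
// Hypothetical event type mirroring the snippet above.
enum L1Event {
    NewHead { head: u64 },
    NewFinalized { finalized: u64 },
}

/// Per the review comment, only finalized heads trigger a stake table
/// update; plain `NewHead` events are ignored here.
fn stake_table_update_block(event: &L1Event) -> Option<u64> {
    match event {
        L1Event::NewFinalized { finalized } => Some(*finalized),
        L1Event::NewHead { .. } => None,
    }
}
```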


```rust
/// Divide the range `start..=end` into chunks of size
/// `events_max_block_range`.
fn chunky2(start: u64, end: u64, chunk_size: usize) -> Vec<(u64, u64)> {
```
Collaborator commented:

How about returning a `Range` instead of `(u64, u64)`?
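The reviewer's suggestion could look like the sketch below, returning `RangeInclusive<u64>` values instead of tuples. This is an illustrative reimplementation, not the PR's `chunky2`:

```rust
use std::ops::RangeInclusive;

/// Divide `start..=end` into chunks of at most `chunk_size` blocks,
/// yielding `RangeInclusive<u64>` values rather than `(u64, u64)` tuples.
fn chunks(start: u64, end: u64, chunk_size: u64) -> Vec<RangeInclusive<u64>> {
    assert!(chunk_size > 0);
    let mut out = Vec::new();
    let mut lo = start;
    while lo <= end {
        let hi = end.min(lo + chunk_size - 1);
        out.push(lo..=hi);
        // Stop cleanly if `hi` is u64::MAX instead of overflowing.
        match hi.checked_add(1) {
            Some(next) => lo = next,
            None => break,
        }
    }
    out
}
```

Callers can then iterate each chunk directly (`for r in chunks(..) { fetch(r) }`) and use `r.start()` / `r.end()` where raw bounds are still needed.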

@sveitser (Collaborator) commented Feb 3, 2025

Need to check whether the limit settings we have right now make sense, or whether we should switch to using subscriptions. For Alchemy the limits are documented here: https://docs.alchemy.com/reference/eth-getlogs

@sveitser (Collaborator) commented Feb 3, 2025

I think for subscriptions there is no way to subscribe to only finalized L1 events. I might be wrong about this, but I couldn't find it. So we would have to do some extra work to make sure we don't consider anything that isn't finalized.

If we use HTTP, then I think it should be possible to handle the errors we get when we fetch too many events, and fetch fewer. For that we can check what HTTP errors are returned from Alchemy and Infura (we use both providers at the moment) and handle those.

@sveitser (Collaborator) commented Feb 3, 2025

Hmm, both seem to return HTTP 200 for too-large requests, and not the same error code:

Infura:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32005,
    "data": {
      "from": "0x4B5749",
      "limit": 10000,
      "to": "0x4B7EEC"
    },
    "message": "query returned more than 10000 results. Try with this block range [0x4B5749, 0x4B7EEC]."
  }
}
```

Alchemy:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Log response size exceeded. You can make eth_getLogs requests with up to a 2K block range and no limit on the response size, or you can request any block range with a cap of 10K logs in the response. Based on your parameters and the response size limit, this block range should work: [0x0, 0x4b6b35]"
  }
}
```

So I wonder if we should retry, say, 5 more times with smaller request sizes as long as we do get a successful response from the RPC, and then give up, because at that point it's likely not the response size that is the problem.
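That retry idea could be sketched as below: halve the block range on each provider error, and give up after a bounded number of attempts. The function name and the `fetch` closure (standing in for an `eth_getLogs` call) are illustrative assumptions, not the PR's implementation:

```rust
/// Try fetching logs for `start..=end`; on error, halve the range and
/// retry, up to `max_retries` extra attempts, then give up.
fn fetch_with_shrinking_range<F>(
    start: u64,
    end: u64,
    max_retries: usize,
    mut fetch: F,
) -> Result<(u64, u64), String>
where
    F: FnMut(u64, u64) -> Result<(), String>,
{
    let mut hi = end;
    for _ in 0..=max_retries {
        match fetch(start, hi) {
            // Success: report the range that worked; the caller would
            // continue from `hi + 1` to cover the remainder.
            Ok(()) => return Ok((start, hi)),
            // Error (e.g. "response size exceeded"): halve the range.
            Err(_) => hi = start + (hi - start) / 2,
        }
    }
    Err("giving up: errors persist even with a small range".into())
}
```

If errors persist even after the range has shrunk several times, the problem is likely not the response size, which matches the comment's conclusion to stop retrying.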
